Private Weighted Random Walk Stochastic Gradient Descent

Authors

Abstract

We consider a decentralized learning setting in which data is distributed over the nodes of a graph. The goal is to learn a global model on the distributed data without involving any central entity that needs to be trusted. While gossip-based stochastic gradient descent (SGD) can be used to achieve this objective, it incurs high communication and computation costs, since it has to wait for all the local models at all the nodes to converge. To speed up the convergence, we propose instead to study random walk based SGD, in which the global model is updated along a random walk on the graph. We propose two algorithms based on two types of random walks that achieve, in a decentralized way, uniform sampling and importance sampling of the data. We provide a non-asymptotic analysis of the rate of convergence, taking into account the constants related to the data and the graph. Our numerical results show that the weighted random walk based algorithm has better performance for high-variance data. Moreover, we propose a privacy-preserving random walk algorithm that achieves local differential privacy based on a Gamma noise mechanism that we propose. We also give a convergence analysis of the algorithm and show that it outperforms additive Laplace-based privacy mechanisms.
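
The abstract describes the method only at a high level. As a rough illustration of the idea, here is a minimal sketch of random-walk SGD in which a Metropolis-Hastings walk gives the walk a uniform stationary distribution over the nodes; the function names, graph representation, and decaying step size are illustrative assumptions, not the paper's exact algorithms (which also include an importance-sampling walk and the Gamma noise mechanism).

```python
import numpy as np

def metropolis_hastings_step(graph, current, rng):
    """One step of a Metropolis-Hastings walk whose stationary distribution
    is uniform over the nodes, regardless of node degrees."""
    neighbors = graph[current]
    candidate = neighbors[rng.integers(len(neighbors))]
    # Accept with probability min(1, deg(current) / deg(candidate));
    # on rejection the walk stays at the current node.
    if rng.random() < min(1.0, len(neighbors) / len(graph[candidate])):
        return candidate
    return current

def random_walk_sgd(graph, data, grad_fn, w0, lr, num_steps, seed=0):
    """A single model vector travels along the random walk and is updated
    with the local stochastic gradient at each visited node."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    node = rng.integers(len(graph))
    for t in range(num_steps):
        x, y = data[node]  # the sample held locally by this node
        w = w - (lr / np.sqrt(t + 1)) * grad_fn(w, x, y)
        node = metropolis_hastings_step(graph, node, rng)
    return w
```

Here `graph` is a list of neighbor lists and `data[i]` is node i's local sample; because the walk's stationary distribution is uniform, the gradients are sampled (asymptotically) as in standard uniform SGD, without any central coordinator.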


Related Articles

Robust Decentralized Differentially Private Stochastic Gradient Descent

Stochastic gradient descent (SGD) is one of the most widely applied machine learning algorithms in unreliable, large-scale decentralized environments. In this type of environment, data privacy is a fundamental concern. The most popular way to investigate this topic is based on the framework of differential privacy. However, many important implementation details and the performance of differentially priv...
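
For context, the standard way to make an SGD step differentially private is to clip each per-example gradient (bounding sensitivity) and add calibrated noise before the update. The sketch below shows only that generic Gaussian-mechanism pattern; the robust decentralized variant studied in this paper may differ in its mechanism and analysis.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr, clip_norm, noise_mult, rng):
    """Generic differentially private SGD step: clip each per-example
    gradient to bound its sensitivity, average, then add Gaussian noise."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Noise scale is proportional to the sensitivity clip_norm / batch size.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=avg.shape)
    return w - lr * (avg + noise)
```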


Batched Stochastic Gradient Descent with Weighted Sampling

We analyze a batched variant of Stochastic Gradient Descent (SGD) with weighted sampling distribution for smooth and non-smooth objective functions. We show that by distributing the batches computationally, a significant speedup in the convergence rate is provably possible compared to either batched sampling or weighted sampling alone. We propose several computationally efficient schemes to app...
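
As a concrete sketch of what weighted (importance) sampling of batches can look like: draw each index with probability proportional to a per-example smoothness bound, and reweight its gradient so the batch estimate stays unbiased. The function below is an illustrative assumption of such a scheme, not the specific schemes proposed in the paper.

```python
import numpy as np

def weighted_batch_sgd(w0, grad_fns, smoothness, lr, batch_size,
                       num_iters, seed=0):
    """Batched SGD with importance sampling: index i is drawn with
    probability p_i proportional to its smoothness bound L_i, and its
    gradient is reweighted by 1 / (n * p_i) to remain unbiased."""
    rng = np.random.default_rng(seed)
    n = len(smoothness)
    p = np.asarray(smoothness, dtype=float)
    p /= p.sum()
    w = w0.copy()
    for _ in range(num_iters):
        idx = rng.choice(n, size=batch_size, p=p)
        g = np.mean([grad_fns[i](w) / (n * p[i]) for i in idx], axis=0)
        w = w - lr * g
    return w
```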


A Differentially Private Stochastic Gradient Descent Algorithm for Multiparty Classification

We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in whic...


Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm

We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning (L/μ) (where L is a bound on the smoothness and μ on the strong convexity) to a linear dependence on L/μ. Furthermore, we show how reweighting the sampling distribution (i.e. importance samplin...
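
The randomized Kaczmarz method mentioned here is itself SGD with importance sampling applied to a least-squares problem: each row of Ax = b is picked with probability proportional to its squared norm, and the iterate is projected onto that row's hyperplane. A minimal sketch (interface names are mine):

```python
import numpy as np

def randomized_kaczmarz(A, b, num_iters, seed=0):
    """Solve Ax = b by picking row i with probability
    ||a_i||^2 / ||A||_F^2 and projecting the iterate onto the
    hyperplane <a_i, x> = b_i."""
    rng = np.random.default_rng(seed)
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    p = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        i = rng.choice(A.shape[0], p=p)
        x += ((b[i] - A[i] @ x) / row_norms_sq[i]) * A[i]
    return x
```

Sampling rows proportionally to their squared norms is exactly the reweighting of the sampling distribution that the abstract connects to the improved convergence guarantee.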


Variational Stochastic Gradient Descent

In the Bayesian approach to probabilistic modeling of data, we select a model for the probabilities of the data that depends on a continuous vector of parameters. For a given data set, Bayes' theorem gives a probability distribution of the model parameters. Then the inference of outcomes and probabilities of new data can be found by averaging over the parameter distribution of the model, which is an intr...



Journal

Journal title: IEEE Journal on Selected Areas in Information Theory

Year: 2021

ISSN: 2641-8770

DOI: https://doi.org/10.1109/jsait.2021.3052975